Results 1 - 16 of 16
1.
PLoS One ; 19(3): e0299545, 2024.
Article in English | MEDLINE | ID: mdl-38466693

ABSTRACT

Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications, but previous methods performed poorly and lacked transparency when detecting shoulder abnormalities on X-ray images, owing to limited training data and weak feature representations. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the ImageNet domain mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy of 99.2%, an F1-score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-based class activation heat maps (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
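As a rough illustration of the feature-fusion idea (not the authors' pipeline), the sketch below stands in for the fine-tuned DL backbones with fixed random tanh projections, concatenates their feature vectors, and trains a nearest-centroid classifier on toy two-class data; all names, dimensions, and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for fine-tuned DL backbones: each maps a flattened
# "image" to a feature vector through a fixed random tanh projection.
def make_extractor(in_dim, out_dim, seed):
    w = np.random.default_rng(seed).normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ w)

extractors = [make_extractor(64, 16, s) for s in range(3)]

def fuse_features(x):
    # Feature fusion: concatenate the vectors produced by every backbone.
    return np.concatenate([f(x) for f in extractors])

# Toy two-class data standing in for normal/abnormal shoulder X-rays.
X = rng.normal(size=(40, 64))
y = np.array([0] * 20 + [1] * 20)
X[y == 1] += 0.8                      # shift class 1 to make it separable

F = np.stack([fuse_features(x) for x in X])

# Nearest-centroid classifier trained on the fused features.
centroids = np.stack([F[y == c].mean(axis=0) for c in (0, 1)])
dists = ((F[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
train_acc = (dists.argmin(axis=1) == y).mean()
```

In a real pipeline each extractor would be a fine-tuned network's penultimate layer and the classifier a trained ML model, but the fuse-then-classify structure is the same.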


Subjects
Arthritis , Deep Learning , Musculoskeletal Diseases , Humans , Shoulder/diagnostic imaging , X-Rays , Hospital Emergency Service
2.
J Mol Model ; 30(3): 62, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38321301

ABSTRACT

CONTEXT: The abilities of Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 as catalysts for the N2 reduction reaction (N2-RR) to produce NH3 are investigated at theoretical levels. The ∆Eadsorption and ∆Eformation of Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 are investigated. The ∆Eadsorption of N2-RR intermediates and the ∆Greaction of the N2-RR reaction steps on Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 are examined. In the acceptable mechanisms, the *NN → *NNH step is the potential-limiting step, and in the enzymatic mechanism the *NN → *NNH step is endothermic. The ∆Greaction of the *NHNH2 → *NH2NH2 step on Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 is -0.904, -0.928, -0.860, -0.882, -0.817 and -0.838 eV, respectively. Co-Al18P18 and Ni-Al21N21 have the highest ∆Greaction values for the N2-RR reaction steps. Finally, it can be concluded that Co-Al18P18, Ni-Al21N21, Fe-B24N24 and Mn-B27P27 have acceptable potential for N2-RR via acceptable pathways. METHODS: The structures of Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 and the N2-RR intermediates are optimized at the PW91PW91/6-311+G(2d,2p) and M06-2X/cc-pVQZ theoretical levels in the GAMESS software. The convergence criteria for force and displacement of Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 and the N2-RR intermediates are 1.5 × 10⁻⁵ Hartree/Bohr and 6.0 × 10⁻⁵ Angstrom. Opt = Tight and MaxStep = 30 are used to optimize Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 and the N2-RR intermediates. The frequencies of Co-Al18P18, Ni-Al21N21, Fe-B24N24, Mn-B27P27, Ti-C60 and Cu-Si72 and the N2-RR intermediates are calculated.
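The notion of a potential-limiting step can be sketched in a few lines: given a free-energy change per reaction step, the limiting step is the one with the most positive ∆G. The values below are hypothetical placeholders; only the endothermic character of *NN → *NNH mirrors the abstract qualitatively.

```python
# Illustrative Delta-G (eV) per N2-RR step; these numbers are made up to
# show how the potential-limiting step would be identified, not results.
steps = {
    "*NN -> *NNH": 0.95,          # endothermic, as reported for this step
    "*NNH -> *NHNH": -0.31,
    "*NHNH -> *NHNH2": -0.45,
    "*NHNH2 -> *NH2NH2": -0.90,
    "*NH2NH2 -> *NH2 + NH3": -0.12,
    "*NH2 -> *NH3": -0.25,
}

# The potential-limiting step is the most positive Delta-G step.
limiting_step = max(steps, key=steps.get)
limiting_dg = steps[limiting_step]
```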

3.
Anal Methods ; 16(9): 1306-1322, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38344759

ABSTRACT

Electrochemical techniques are commonly used to analyze and screen various environmental pathogens. When used in conjunction with optical recognition methods, they can extend the sensing range, lower the detection limit, and offer mutual validation. Nowadays, electrochemical-optical dual-mode biosensors ensure the accuracy of test results by integrating two signals into one, indicating their potential use in primary food-safety quantitative assays and screening tests. In particular, the visible optical signals of electrochemical/colorimetric dual-mode biosensors could meet the demand for real-time screening of microbial pathogens. While electrochemical-optical dual-mode probes have been receiving increasing attention, there is limited emphasis on design approaches for sensors intended for microbial pathogens. Here, we review recent progress in the merging of optical and electrochemical techniques, including fluorescence, colorimetry, surface plasmon resonance (SPR), and surface-enhanced Raman spectroscopy (SERS). This study particularly emphasizes the reported sensing performance, including sensing principles, types, cutting-edge design approaches, and applications. Finally, some concerns and upcoming advancements in dual-mode probes are briefly outlined.
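A minimal sketch of the mutual-validation idea behind a dual-mode readout, assuming a simple relative-difference agreement test; the tolerance and channel readings are illustrative and not taken from any real assay.

```python
def dual_mode_readout(electrochemical, optical, tolerance=0.15):
    # Mutual validation: the two channel estimates must agree within
    # `tolerance` relative difference, otherwise flag for re-testing.
    mean = (electrochemical + optical) / 2.0
    agree = abs(electrochemical - optical) <= tolerance * mean
    return mean, agree

value_ok, ok = dual_mode_readout(1.02, 0.98)    # channels agree
value_bad, bad = dual_mode_readout(1.00, 0.50)  # channels disagree
```

A real dual-mode sensor would calibrate each channel separately before fusing them, but the accept-only-on-agreement logic is the core of the mutual-validation claim.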


Subjects
Biosensing Techniques , Biosensing Techniques/methods , Surface Plasmon Resonance/methods , Electrochemical Techniques/methods , Food Safety , Colorimetry
4.
Article in English | MEDLINE | ID: mdl-38082960

ABSTRACT

The main challenge in adopting deep learning models is limited data for training, which can lead to poor generalization and a high risk of overfitting, particularly when detecting forearm abnormalities in X-ray images. Transfer learning from ImageNet is commonly used to address these issues. However, this technique is ineffective for grayscale medical imaging because of a mismatch between the learned features. To mitigate this issue, we propose a domain-adaptation deep TL approach that involves training six pre-trained ImageNet models on a large number of X-ray images from various body parts, then fine-tuning the models on a target dataset of forearm X-ray images. Furthermore, a feature-fusion technique combines the features extracted by the deep models to train machine learning classifiers. Gradient-based class activation heat maps (Grad-CAM) were used to verify the accuracy of our results. This method allows us to see which parts of an image the model uses to make its classification decisions. The statistical results and Grad-CAM show that the proposed TL approach is able to alleviate the domain-mismatch problem and is more accurate in its decision-making than models trained using the ImageNet TL technique, achieving an accuracy of 90.7%, an F1-score of 90.6%, and a Cohen's kappa of 81.3%. These results indicate that the proposed approach effectively improved the performance of the employed models, both individually and with the fusion technique, and helped to reduce the domain mismatch between the source of TL and the target task.
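The Grad-CAM computation referenced above can be sketched in a few lines: each channel of the last convolutional activation is weighted by its spatially averaged gradient, the weighted maps are summed, and a ReLU keeps only positive evidence. The toy tensors below are hypothetical stand-ins for real network activations.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps: (C, H, W) activations of the last conv layer.
    # gradients:    (C, H, W) gradients of the class score w.r.t. them.
    weights = gradients.mean(axis=(1, 2))              # alpha_k per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                         # ReLU: positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1]
    return cam

# Toy example: channel 0 fires on the top-left region and gets all the weight,
# so the heat map highlights exactly that region.
fmap = np.zeros((2, 4, 4))
fmap[0, :2, :2] = 1.0
grads = np.zeros((2, 4, 4))
grads[0] = 1.0
heat = grad_cam(fmap, grads)
```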


Subjects
Deep Learning , X-Rays , Forearm/diagnostic imaging , Machine Learning , Radiography
5.
Sensors (Basel) ; 23(18)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37766022

ABSTRACT

Multiple-Input Multiple-Output (MIMO) is a promising technology to enable spatial multiplexing and improve throughput in wireless communication networks. To obtain the full benefits of MIMO systems, the Channel State Information (CSI) should be acquired correctly at the transmitter side for optimal beamforming design. The analytic center cutting-plane method (ACCPM) has been shown to be an appealing way to obtain the CSI at the transmitter side. This paper adopts ACCPM to learn downlink CSI in both single-user and multi-user scenarios. In particular, during the learning phase it uses the null-space beamforming vector of the estimated CSI to reduce power usage, which approaches zero as the learned CSI approaches the optimal solution. Simulation results show that our proposed method converges and outperforms previous studies. The effectiveness of the proposed method was corroborated by applying it to the scattering channel and WINNER II channel models.
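The null-space beamforming step can be sketched as follows: given an estimated channel row-vector, any right singular vector beyond its rank is orthogonal to it, so transmitting along that direction delivers (near-)zero power through the estimated channel. The channel values below are illustrative, and the sketch uses a real-valued channel for simplicity.

```python
import numpy as np

def null_space_beamformer(h):
    # SVD of the 1 x N channel row: right singular vectors beyond its rank
    # span the null space of h, so any of them is a zero-power direction.
    h = np.asarray(h, dtype=float).reshape(1, -1)
    _, _, vt = np.linalg.svd(h)
    w = vt[-1]
    return w / np.linalg.norm(w)

# Illustrative estimated downlink channel.
h_est = np.array([1.0, 2.0, -1.0, 0.5])
w = null_space_beamformer(h_est)
# Power received through the estimated channel is (numerically) zero.
received_power = float(np.abs(h_est @ w) ** 2)
```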

6.
Cancers (Basel) ; 15(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37568821

ABSTRACT

Medical image classification poses significant challenges in real-world scenarios. One major obstacle is the scarcity of labelled training data, which hampers the performance of image-classification algorithms and generalisation. Gathering sufficient labelled data is often difficult and time-consuming in the medical domain, but deep learning (DL) has shown remarkable performance, although it typically requires a large amount of labelled data to achieve optimal results. Transfer learning (TL) has played a pivotal role in reducing the time, cost, and need for a large number of labelled images. This paper presents a novel TL approach that aims to overcome the limitations and disadvantages of TL that are characteristic of an ImageNet dataset, which belongs to a different domain. Our proposed TL approach involves training DL models on numerous medical images that are similar to the target dataset. These models were then fine-tuned using a small set of annotated medical images to leverage the knowledge gained from the pre-training phase. We specifically focused on medical X-ray imaging scenarios that involve the humerus and wrist from the musculoskeletal radiographs (MURA) dataset. Both of these tasks face significant challenges regarding accurate classification. The models trained with the proposed TL were used to extract features and were subsequently fused to train several machine learning (ML) classifiers. We combined these diverse features to represent various relevant characteristics in a comprehensive way. Through extensive evaluation, our proposed TL and feature-fusion approach using ML classifiers achieved remarkable results. For the classification of the humerus, we achieved an accuracy of 87.85%, an F1-score of 87.63%, and a Cohen's Kappa coefficient of 75.69%. For wrist classification, our approach achieved an accuracy of 85.58%, an F1-score of 82.70%, and a Cohen's Kappa coefficient of 70.46%. 
The results demonstrated that the models trained using our proposed TL approach outperformed those trained with ImageNet TL. We employed visualisation techniques to further validate these findings, including a gradient-based class activation heat map (Grad-CAM) and locally interpretable model-independent explanations (LIME). These visualisation tools provided additional evidence to support the superior accuracy of models trained with our proposed TL approach compared to those trained with ImageNet TL. Furthermore, our proposed TL approach exhibited greater robustness in various experiments compared to ImageNet TL. Importantly, the proposed TL approach and the feature-fusion technique are not limited to specific tasks. They can be applied to various medical image applications, thus extending their utility and potential impact. To demonstrate the concept of reusability, a computed tomography (CT) case was adopted. The results obtained from the proposed method showed improvements.

7.
Int J Telemed Appl ; 2023: 7741735, 2023.
Article in English | MEDLINE | ID: mdl-37168809

ABSTRACT

The significance of deep learning techniques for steady-state visually evoked potential- (SSVEP-) based brain-computer interface (BCI) applications is assessed through a systematic review. Three reliable databases, PubMed, ScienceDirect, and IEEE, were considered to gather relevant scientific and theoretical articles. Initially, 125 papers published between 2010 and 2021 were found in this integrated research field. After the filtering process, only 30 articles were identified and classified into five categories based on their type of deep learning method. The first category, convolutional neural network (CNN), accounts for 70% (n = 21/30). The second category, recurrent neural network (RNN), accounts for 10% (n = 3/30). The third and fourth categories, deep neural network (DNN) and long short-term memory (LSTM), account for 6% each (n = 2/30). The fifth category, restricted Boltzmann machine (RBM), accounts for 3% (n = 1/30). The literature's findings are examined in terms of the main aspects identified in existing applications of deep learning pattern recognition techniques in SSVEP-based BCI, such as feature extraction, classification, activation functions, validation methods, and achieved classification accuracies. A comprehensive mapping analysis was also conducted, which identified six categories. Current challenges of ensuring trustworthy deep learning in SSVEP-based BCI applications are discussed, and recommendations are provided to researchers and developers. The study critically reviews the currently unsolved issues of SSVEP-based BCI applications in terms of development challenges based on deep learning techniques and selection challenges based on multicriteria decision-making (MCDM). A trust proposal solution is presented, with a three-phase methodology for evaluating and benchmarking SSVEP-based BCI applications using fuzzy decision-making techniques.
Valuable insights and recommendations for researchers and developers in the SSVEP-based BCI and deep learning are provided.

8.
Diagnostics (Basel) ; 13(10)2023 May 10.
Article in English | MEDLINE | ID: mdl-37238174

ABSTRACT

Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used the Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused the features from both networks to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curve (AUC) of 0.99 and an accuracy range of 97-100% in distinguishing normal eyes from eyes with subclinical and established KCN. We further validated the model on an independent dataset of 213 eyes examined in Iraq and obtained AUCs of 0.91-0.92 and an accuracy range of 88-92%. The proposed model is a step toward improving the detection of clinical and subclinical forms of KCN.
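The AUC reported above is, by definition, the probability that a randomly chosen positive case outscores a randomly chosen negative one (ties count one half). A minimal rank-based computation of the metric, not the authors' evaluation code, looks like this:

```python
def roc_auc(scores, labels):
    # Rank-based AUC: fraction of positive/negative pairs where the
    # positive case receives the higher score (ties count one half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0 ...
auc_perfect = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
# ... while a single inversion lowers it to 0.75.
auc_mixed = roc_auc([0.9, 0.3, 0.4, 0.1], [1, 1, 0, 0])
```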

9.
Sensors (Basel) ; 23(7)2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37050795

ABSTRACT

Concept drift (CD) in data streaming scenarios such as networking intrusion detection systems (IDS) refers to the change in the statistical distribution of the data over time. There are five principal variants of CD: incremental, gradual, recurrent, sudden, and blip. Genetic programming combiner (GPC) classification is an effective core candidate for data stream classification in IDS. However, its basic structure relies on traditional static machine learning models that receive one-time training, limiting its ability to handle CD. To address this issue, we propose an extended variant of the GPC using three main components. First, we replace the existing classifiers with alternatives: online sequential extreme learning machine (OSELM), feature adaptive OSELM (FA-OSELM), and knowledge preservation OSELM (KP-OSELM). Second, we add two new components to the GPC, specifically a data-balancing component and a classifier-update component. Third, the coordination between the sub-models produces three novel variants of the GPC: GPC-KOS for KP-OSELM, GPC-FOS for FA-OSELM, and GPC-OS for OSELM. This article presents the first data stream-based classification framework that provides novel strategies for handling CD variants. The experimental results demonstrate that both GPC-KOS and GPC-FOS outperform the traditional GPC and other state-of-the-art methods, and that the transfer learning and memory features contribute to the effective handling of most types of CD. Moreover, the application of our incremental variants to real-world datasets (KDD Cup '99, CICIDS-2017, CSE-CIC-IDS-2018, and ISCX '12) demonstrates improved performance (GPC-FOS on CSE-CIC-IDS-2018 and CICIDS-2017; GPC-KOS on ISCX '12 and KDD Cup '99), with maximum accuracy rates of 100% and 98% by GPC-KOS and GPC-FOS, respectively. However, our GPC variants do not show superior performance in handling blip drift.
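A toy illustration of detecting sudden concept drift (not the GPC/OSELM machinery itself): compare a reference accuracy window against a recent window and flag drift when the recent mean drops beyond a threshold. Window size and threshold here are arbitrary.

```python
from collections import deque

class DriftDetector:
    # Sudden-drift sketch: fill a reference window with early per-sample
    # accuracy (1 = correct, 0 = wrong), then flag drift when a recent
    # window's mean accuracy drops by more than `threshold`.
    def __init__(self, window=20, threshold=0.3):
        self.ref = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def update(self, correct):
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(correct)
            return False
        self.recent.append(correct)
        if len(self.recent) < self.recent.maxlen:
            return False
        drop = sum(self.ref) / len(self.ref) - sum(self.recent) / len(self.recent)
        return drop > self.threshold

det = DriftDetector(window=10, threshold=0.3)
# Accuracy collapses halfway through the stream: sudden drift.
flags = [det.update(c) for c in [1] * 10 + [0] * 10]
drift_detected = any(flags)
```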

10.
Plants (Basel) ; 12(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37111827

ABSTRACT

The current methods of classifying plant disease images are mainly affected by the training phase and the characteristics of the target dataset. Collecting plant samples during different leaf life-cycle infection stages is time-consuming. However, these samples may exhibit multiple symptoms that share the same features but with different densities. The manual labelling of such samples demands exhaustive labour that may contain errors and corrupt the training phase. Furthermore, labelling and annotation consider the dominant disease and neglect the minor disease, leading to misclassification. This paper proposes a fully automated leaf disease diagnosis framework that extracts the region of interest based on a modified colour process, after which each symptom is self-clustered using an extended Gaussian kernel density estimation and the probability of the nearest shared neighbourhood. Each group of symptoms is presented to the classifier independently. The objective is to cluster symptoms using a nonparametric method, decrease the classification error, and reduce the need for a large-scale dataset to train the classifier. To evaluate the efficiency of the proposed framework, coffee leaf datasets were selected, owing to their wide variety of feature demonstrations at different levels of infection. Several kernels with their appropriate bandwidth selectors were compared. The best probabilities were achieved by the proposed extended Gaussian kernel, which connects neighbouring lesions into one symptom cluster without the need for any influencing set to guide toward the correct cluster. Clusters are presented with equal priority to a ResNet50 classifier, so misclassification is reduced, with an accuracy of up to 98%.
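The Gaussian kernel density estimation at the core of the clustering step can be sketched in one dimension; the extended kernel in the paper additionally links neighbouring lesions into one cluster, which this minimal stand-in omits. Sample positions and bandwidth are illustrative.

```python
import math

def gaussian_kde(samples, bandwidth):
    # 1-D Gaussian kernel density estimate: a normalised sum of Gaussian
    # bumps, one centred on each sample.
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Two lesion groups along one image axis; the KDE peaks near each group,
# so density maxima mark symptom clusters.
samples = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8]
f = gaussian_kde(samples, bandwidth=0.4)
```

Cluster boundaries then fall in the low-density gap between the two modes, which is what lets the method cluster symptoms without a labelled reference set.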

11.
Life (Basel) ; 13(3)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36983845

ABSTRACT

Big-medical-data classification and image detection are crucial tasks in the field of healthcare, as they can assist with diagnosis, treatment planning, and disease monitoring. Logistic regression and YOLOv4 are popular algorithms that can be used for these tasks. However, these techniques have limitations and performance issues with big medical data. In this study, we present a robust approach to big-medical-data classification and image detection using logistic regression and YOLOv4, respectively. To improve the performance of these algorithms, we propose the use of advanced parallel k-means pre-processing, a clustering technique that identifies patterns and structures in the data. Additionally, we leverage the acceleration capabilities of a neural engine processor to further enhance the speed and efficiency of our approach. We evaluated our approach on several large medical datasets and showed that it can accurately classify large amounts of medical data and detect medical images. Our results demonstrate that the combination of advanced parallel k-means pre-processing and the neural engine processor significantly improves the performance of logistic regression and YOLOv4, making them more reliable for use in medical applications. This new approach offers a promising solution for medical data classification and image detection and may have significant implications for the field of healthcare.
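The k-means pre-processing step can be sketched with plain Lloyd iterations; the paper's variant is an advanced parallel version accelerated on a neural engine, so this shows only the core assignment/update loop on toy data.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic init for reproducibility: evenly spaced samples.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Assignment step: nearest centre for every point.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each centre to the mean of its points.
        new = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated blobs are recovered as two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (25, 2)), rng.normal(5.0, 0.2, (25, 2))])
labels, centers = kmeans(X, k=2)
```

In a pre-processing role, the resulting cluster labels or centre distances would be fed to the downstream classifier as structure-revealing features.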

12.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850457

ABSTRACT

An intelligent remote prioritization method for patients with high-risk multiple chronic diseases is proposed in this research, based on emotion and sensory measurements and multi-criteria decision making. The methodology comprises two phases: (1) a case study is discussed through the adoption of a multi-criteria decision matrix for high-risk patients; (2) the technique for reorganizing opinion order to interval levels (TROOIL) is modified by combining it with an extended fuzzy-weighted zero-inconsistency (FWZIC) method over fractional orthotriple fuzzy sets to address the objective-weighting issues of the original TROOIL. At the first hierarchy level, chronic heart disease is identified as the most important criterion, followed by the emotion-based criteria at the second. The third hierarchy level shows that Peaks is the most important sensor-based criterion and chest pain the most important emotion criterion. Low blood pressure is identified as the most important disease criterion for patient prioritization, with the most severe cases prioritized first. The results are evaluated using systematic ranking and sensitivity analysis.
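A simple weighted decision-matrix ranking conveys the prioritization idea; the paper's actual method is a modified TROOIL with FWZIC-derived weights, whereas this sketch uses plain min-max normalisation and a weighted sum over hypothetical criteria, patients, and weights.

```python
def prioritise(patients, weights):
    # SAW-style ranking: min-max normalise each criterion across patients,
    # then sort by the weighted sum of normalised severities (high first).
    names = list(patients)
    criteria = list(weights)
    cols = {c: [patients[n][c] for n in names] for c in criteria}
    def norm(c, v):
        lo, hi = min(cols[c]), max(cols[c])
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    score = {n: sum(weights[c] * norm(c, patients[n][c]) for c in criteria)
             for n in names}
    return sorted(names, key=score.get, reverse=True)

# Hypothetical severity readings (higher = more severe) and weights.
patients = {
    "P1": {"heart": 0.9, "chest_pain": 0.8, "bp_drop": 0.7},
    "P2": {"heart": 0.2, "chest_pain": 0.3, "bp_drop": 0.1},
    "P3": {"heart": 0.6, "chest_pain": 0.9, "bp_drop": 0.4},
}
weights = {"heart": 0.5, "chest_pain": 0.3, "bp_drop": 0.2}
ranking = prioritise(patients, weights)
```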


Subjects
Heart Diseases , Hypotension , Humans , Emotions , Intelligence , Patients
13.
PeerJ Comput Sci ; 7: e715, 2021.
Article in English | MEDLINE | ID: mdl-34722871

ABSTRACT

Transfer learning (TL) has been widely utilized to address the lack of training data for deep learning models. Specifically, one of the most popular uses of TL has been the pre-trained models of the ImageNet dataset. Nevertheless, although these pre-trained models have shown effective performance in several domains of application, they may not offer significant benefits in all medical imaging scenarios. Such models were designed to classify a thousand classes of natural images, and there are fundamental differences between the features they learn and those required by medical imaging tasks. Most medical imaging applications range from two to ten different classes, where we suspect it would not be necessary to employ deeper learning models. This paper investigates this hypothesis and develops an experimental study to examine the corresponding conclusions. A lightweight convolutional neural network (CNN) model and the pre-trained models were evaluated using three different medical imaging datasets. We trained the lightweight CNN model and the pre-trained models under two scenarios: once with a small number of images and once with a large number. Surprisingly, the lightweight model trained from scratch achieved a more competitive performance than the pre-trained models. More importantly, the lightweight CNN model can be successfully trained and tested using basic computational tools and still provide high-quality results, specifically on medical imaging datasets.

14.
J Big Data ; 8(1): 53, 2021.
Article in English | MEDLINE | ID: mdl-33816053

ABSTRACT

In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been extensively used to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackled only one aspect of it, leading to an overall lack of knowledge. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications.
Computational tools including FPGA, GPU, and CPU are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and summary and conclusion.

15.
Cancers (Basel) ; 13(7)2021 Mar 30.
Article in English | MEDLINE | ID: mdl-33808207

ABSTRACT

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset for the current task. Most methods of medical image classification employ transfer learning from pretrained models, e.g., ImageNet, which has been proven to be ineffective due to the mismatch in learned features between natural images, e.g., ImageNet, and medical images; it also results in the utilization of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios dealing with skin and breast cancer classification. According to the reported results, the proposed approach can significantly improve the performance of both classification scenarios. In terms of skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach.
Secondly, it achieved accuracy values of 85.29% and 97.51%, respectively, when trained from scratch and with the proposed approach in the breast cancer scenario. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited. Moreover, it can be utilized to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot-skin images, classifying them into two classes, normal or abnormal (diabetic foot ulcer, DFU). It achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.

16.
Plants (Basel) ; 9(10)2020 Oct 01.
Article in English | MEDLINE | ID: mdl-33019765

ABSTRACT

Deep learning (DL) represents the golden era in the machine learning (ML) domain, and it has gradually become the leading approach in many fields. It is currently playing a vital role in the early detection and classification of plant diseases. The use of ML techniques in this field is viewed as having brought considerable improvement in cultivation productivity sectors, particularly with the recent emergence of DL, which seems to have increased accuracy levels. Recently, many DL architectures have been implemented accompanying visualisation techniques that are essential for determining symptoms and classifying plant diseases. This review investigates and analyses the most recent methods, developed over three years leading up to 2020, for training, augmentation, feature fusion and extraction, recognising and counting crops, and detecting plant diseases, including how these methods can be harnessed to feed deep classifiers and their effects on classifier accuracy.
